-
AI algorithms are increasingly influencing decision-making in criminal justice, including tasks such as predicting recidivism and identifying suspects by their facial features. The increasing reliance on machine-assisted legal decision-making impacts the rights of criminal defendants, the work of law enforcement agents, the legal strategies taken by attorneys, the decisions made by judges, and the public’s trust in courts. As such, it is crucial to understand how the use of AI is perceived by the professionals who interact with algorithms. The analysis explores the connection between law enforcement and legal professionals’ stated and behavioral trust. Results from three survey experiments suggest that law enforcement and legal professionals express skepticism about algorithms but demonstrate a willingness to integrate their recommendations into their own decisions and, thus, do not exhibit “algorithm aversion.” These findings suggest that there could be a tendency toward increased reliance on machine-assisted legal decision-making despite concerns about the impact of AI on the rights of criminal defendants. Free, publicly accessible full text available June 1, 2026.
-
Trust in elections is paramount for a democracy, and citizens are more likely to cast ballots and support election results when they perceive election processes as trustworthy. With algorithms playing a growing role in election administration, we ask whether relying on automated or hybrid systems for verifying ballot signatures allays or increases citizens’ confidence in elections. To answer this, we use survey experiments to first determine respondents’ comfort with such systems in elections and then to assess the circumstances that bound this trust. We find that respondents trust automated and non-automated systems similarly, but do not have a clear conception of the confidence threshold, set by policymakers, necessary for rejecting ballots. Additionally, respondents blame election officials more than algorithms when mistakes are made, although this result is contingent on the type of error and respondents’ partisanship. These results have significant implications for confidence in signature verification and other election processes that rely on artificial intelligence.
-
Public decision-makers incorporate algorithmic decision aids, often developed by private businesses, into the policy process, in part as a method for justifying difficult decisions. Ethicists have worried that over-trust in algorithmic advice, and concerns about punishment for departing from an algorithm’s recommendation, will result in over-reliance and harm democratic accountability. We test these concerns in a set of two pre-registered survey experiments in the judicial context, conducted on three representative U.S. samples. The results show no support for the hypothesized blame dynamics, regardless of whether the judge agrees or disagrees with the algorithm. Algorithms, moreover, do not have a significant impact relative to other sources of advice. Respondents who are generally more trusting of elites assign greater blame to the decision-maker when the decision-maker disagrees with the algorithm, and they assign more blame when they think the decision-maker is abdicating responsibility by agreeing with an algorithm.
-
The use of algorithms and automated systems, especially those leveraging artificial intelligence (AI), has been exploding in the public sector, but their use has been controversial. Ethicists, public advocates, and legal scholars have debated whether biases in AI systems should bar their use or whether the potential net benefits, especially for traditionally disadvantaged groups, justify even greater expansion. While this debate has become voluminous, no scholars of whom we are aware have conducted experiments with the groups affected by these policies about how they view the trade-offs. We conduct a set of two conjoint experiments with a high-quality sample of 973 Americans who identify as Black or African American, in which we randomize the level of inter-group disparity in outcomes and the net effect on such adverse outcomes in two highly controversial contexts: pre-trial detention and traffic camera ticketing. The results suggest that respondents are willing to tolerate some level of disparity in outcomes in exchange for certain net improvements for their community. These results turn the debate from an abstract ethical argument into an evaluation of political feasibility and policy design based on empirics.
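As a methodological aside, the sketch below (in Python) illustrates how effects are commonly estimated from a fully randomized conjoint design like the one described above: a linear probability model of profile choice on the randomized attributes, with standard errors clustered by respondent. The column names (chosen, disparity_level, net_benefit, respondent_id) and the OLS specification are illustrative assumptions, not the authors' replication code.

# Minimal AMCE-style estimation sketch for a fully randomized conjoint design.
# Assumptions (not from the paper): a long-format DataFrame `df` with one row
# per rated profile, a binary outcome `chosen`, randomized attribute columns
# `disparity_level` and `net_benefit`, and a `respondent_id` for clustering.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_amce(df: pd.DataFrame):
    """Regress profile choice on the randomized attributes; with full
    randomization, the coefficients approximate average marginal component
    effects. Standard errors are clustered by respondent."""
    model = smf.ols("chosen ~ C(disparity_level) + C(net_benefit)", data=df)
    return model.fit(cov_type="cluster",
                     cov_kwds={"groups": df["respondent_id"]})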
-
Carlotta Demeniconi and Nitesh V. Chawla (Eds.) The motives and means of explicit state censorship have been well studied, both quantitatively and qualitatively. Self-censorship by media outlets, however, has not received nearly as much attention, mostly because it is difficult to detect systematically. We develop a novel approach to identify news media self-censorship by using social media as a sensor. We develop a hypothesis-testing framework to identify and evaluate censored clusters of keywords and a near-linear-time algorithm (called GraphDPD) to identify the highest-scoring clusters as indicators of censorship. We evaluate the accuracy of our framework against other state-of-the-art algorithms using both semi-synthetic and real-world data from Mexico and Venezuela during 2014. These tests demonstrate the capacity of our framework to identify self-censorship and provide an indicator of broader media freedom. The results of this study lay the foundation for the detection and study of, and policy responses to, self-censorship.
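To make the cluster-detection idea concrete, here is a minimal, hypothetical Python sketch of the general approach: grow a connected cluster of keywords that are prevalent on social media but under-covered in news media, scoring candidate clusters with a simple log-likelihood-ratio statistic. The greedy search and the scoring form are illustrative assumptions; this is not the paper's GraphDPD algorithm or its statistic.

# Illustrative greedy keyword-cluster search (an assumed sketch, not GraphDPD).
import math
import networkx as nx

def llr_score(social_count, news_count, social_total, news_total):
    """Score that is large when a keyword set is common on social media but
    under-represented in news coverage (hypothetical scoring form)."""
    p_social = social_count / max(social_total, 1)
    p_news = news_count / max(news_total, 1)
    if social_count == 0 or p_news >= p_social:
        return 0.0
    return social_count * math.log(p_social / max(p_news, 1e-9))

def grow_cluster(graph: nx.Graph, social: dict, news: dict,
                 social_total: int, news_total: int, seed: str):
    """Greedily grow a connected keyword cluster from `seed`, adding the
    neighboring keyword that most increases the cluster score; stop when no
    addition improves it. Returns the cluster and its score."""
    def score(nodes):
        s = sum(social.get(k, 0) for k in nodes)
        n = sum(news.get(k, 0) for k in nodes)
        return llr_score(s, n, social_total, news_total)

    cluster = {seed}
    best = score(cluster)
    improved = True
    while improved:
        improved = False
        frontier = {nb for k in cluster for nb in graph.neighbors(k)} - cluster
        for cand in sorted(frontier):
            trial = score(cluster | {cand})
            if trial > best:
                best, cluster, improved = trial, cluster | {cand}, True
    return cluster, best

In use, graph would be a keyword co-occurrence graph built from social media posts, social and news would be keyword frequency counts from the two corpora, and the search would run from many seed keywords, keeping the highest-scoring clusters as candidate indicators of self-censorship.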
